Poster: nylwgirl Date: Dec 30, 2012 7:43pm
Forum: faqs Subject: how to remove page without robots.txt

How does one remove indexed pages hosted on free subdomains, blogs, and journals, such as old GeoCities pages and personal blogs?

True, for some of these pages you can place a robots.txt file to do that. But if the main domain changes hands or shuts down, users lose access and the files disappear. That includes the ability to place a robots.txt file in the first place, and any exclusions that were in place go with it.
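For reference, the exclusion mechanism mentioned above is a robots.txt file at the site root. The `ia_archiver` user-agent is the one the Internet Archive's crawler has historically honored; a minimal example blocking it from the whole site:

```
# Block the Internet Archive's crawler from the entire site
User-agent: ia_archiver
Disallow: /
```

Note this only works while you still control the domain and can serve the file, which is exactly the limitation raised here.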

So perhaps better questions are:

(1) Is there an automated way to instruct the Archive to exclude files from being archived, or to delete ones it already holds?
(2) What about pages already indexed, where robots.txt isn't an option because the site or domain no longer exists? How can those be removed?